Feathering

Goals:

  • Describe and demonstrate feathering process
  • Show some of the free parameters that can be selected when feathering
  • Show various visualizations that can be used to assess goodness-of-merge
    • PSD plots
    • Residuals from theoretical
  • Demonstrate limitations of image combination
  • Show tools for simulating realistic synthetic data

What is feathering?

Fourier-space (UV-space) weighted averaging of two data sets.

The weight function is selected to match the Fourier transform of the single-dish telescope's beam.
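As a concrete sketch (assuming simple unit-sum weights; uvcombine's fftmerge adds normalization and several optional behaviors beyond this), the weighted average looks like:

```python
import numpy as np

def feather_sketch(im_hi, im_lo, kfft, ikfft):
    """Minimal sketch of Fourier-space feathering.

    kfft is the UV-space weight for the single-dish (low-res) image,
    nominally the Fourier transform of the single-dish beam; ikfft is
    (1 - kfft) and weights the interferometer (high-res) image.
    """
    fft_lo = np.fft.fft2(im_lo)
    fft_hi = np.fft.fft2(im_hi)
    # weighted average of the two datasets in the UV domain
    fftsum = kfft * fft_lo + ikfft * fft_hi
    return np.fft.ifft2(fftsum).real
```

Because the weights sum to one everywhere, feeding the same image in as both inputs returns that image unchanged.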

Feathering (combination) tests

This notebook presents a series of experiments in single-dish + interferometer combination on "realistic" data.

We're "observing" at 2mm, so a 12m dish has a FWHM=40" Gaussian beam and a 9m baseline has a sharp cutoff at 56"

Requirements for this work: turbustat generates our synthetic data and helps with power-spectral-density (PSD) plotting; astropy.convolution provides convolution tools; and uvcombine is our Python-only implementation of feather.

https://turbustat.readthedocs.io/en/latest/

from turbustat.simulator.gen_field import make_extended
from turbustat.statistics import psds
from astropy import convolution, units as u
from astropy.io import fits
import numpy as np
import pylab as pl
from uvcombine.uvcombine import feather_kernel, fftmerge, feather_compare

We create a synthetic power-law power-spectrum image. This sort of image is typical of a dust image of the Galactic plane, for example.

# create an input image with specified parameters
# (this can later be modified - it will be good to examine the effects of
# different power laws, different types of input...)
# We're assuming a scale of 1"/pixel for this example
imsize = 512
im = make_extended(imsize=imsize, powerlaw=1.5, randomseed=0)
# the real sky is positive, so we subtract the minimum to force the overall image positive
im = im - im.min()

Input Image Visualization

This is the input image along with its histogram.

The power spectrum of the input image (set to be $\alpha=1.5$), verifying that the turbustat code works

Next, we create our simulated interferometer by creating a UV domain and selecting which pixels in that domain will be part of our telescope. This process creates an ideal interferometer.

# set up the grid
ygrid, xgrid = np.indices(im.shape, dtype='float')
rr = ((xgrid-im.shape[1]/2)**2+(ygrid-im.shape[0]/2)**2)**0.5
# Create a UV sampling mask.
# This removes all large-angular scale (r<8) features *in UV space* and all
# small angular scales.
# In fourier space, r=0 corresponds to the DC component
# r=1 corresponds to the full map (one period over that map)
# r=256 is the smallest angular scale, which is 2 pixels
# We're assuming a pixel scale of 1" / pixel
# therefore 56" corresponds to 9m at 2mm (i.e., nearly the closest spacing possible for 7m)
# We cut off the "interferometer" at 2.5" resolution
largest_scale = 56.*u.arcsec
smallest_scale = 2.5*u.arcsec
pixscale = 1*u.arcsec
image_scale = im.shape[0]*pixscale # assume a square image (512 pixels = 512")
ring = (rr>=(image_scale/largest_scale)) & (rr<=(image_scale/smallest_scale))
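The mapping between angular scale and UV-grid radius used above can be written as a small helper (a sketch for this 512-pixel, 1"/pixel setup; the helper name is ours):

```python
def scale_to_uv_radius(scale_arcsec, imsize=512, pixscale_arcsec=1.0):
    # A feature with angular scale S spans (imsize*pixscale)/S periods
    # across the map, which is its radius in the UV grid
    return imsize * pixscale_arcsec / scale_arcsec

print(scale_to_uv_radius(56.0))  # largest scale -> r ~ 9.1
print(scale_to_uv_radius(2.5))   # smallest scale -> r ~ 205
```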

Synthetic Perfect Interferometer

The synthetic interferometer's UV coverage map (it's a perfect interferometer)

Next, we create the interferometric map by multiplying our interferometer mask by the fourier transform of the sky data

# create the interferometric map by removing both large and small angular
# scales in fourier space
imfft = np.fft.fft2(im)
imfft_interferometered = imfft * np.fft.fftshift(ring)
im_interferometered = np.fft.ifft2(imfft_interferometered)

The interferometric image does not preserve total flux, as expected. Note that the mean of the histogram is shifted.
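The lost flux lives in the unsampled short-spacing terms; a self-contained numpy illustration (a toy image, not the notebook's data) of removing just the DC component:

```python
import numpy as np

rng = np.random.default_rng(0)
sky = rng.random((64, 64)) + 5.0      # strictly positive toy "sky"
skyfft = np.fft.fft2(sky)
skyfft[0, 0] = 0.0                    # zero the DC (total-flux) component
filtered = np.fft.ifft2(skyfft).real
# the mean is removed entirely, while the fluctuations are untouched
print(sky.mean(), filtered.mean())
```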

The residual of the original image minus the interferometrically observed image. The large scales and noise are preserved.

Synthetic Single Dish

The single dish map is just a convolution of the original data with a Gaussian beam. It preserves flux but loses small scales.

# create the single-dish map by convolving the image with a FWHM=40" kernel
# (this interpretation is much easier than the sharp-edged stuff in fourier
# space because the kernel is created in real space)
lowresfwhm = 40*u.arcsec
singledish_kernel = convolution.Gaussian2DKernel(lowresfwhm/pixscale/2.35, x_size=im.shape[1], y_size=im.shape[0])
singledish_kernel_fft = np.fft.fft2(singledish_kernel)
singledish_im = convolution.convolve_fft(im,
                                         kernel=singledish_kernel,
                                         boundary='fill',
                                         fill_value=im.mean())

The single-dish image and its histogram

The single dish in Fourier space

We show the single dish beam in Fourier space with the interferometer coverage range overlaid

Feathering

Feathering is the combination of the single-dish image with the interferometric image in the UV domain.

In the uvcombine package, this is handled by uvcombine.feather_simple. However, we show the components of that function here.

For comparison, CASA's feather takes these inputs:

#  feather :: Combine two images using their Fourier transforms
imagename           =         ''        #  Name of output feathered image
highres             =         ''        #  Name of high resolution (interferometer) image
lowres              =         ''        #  Name of low resolution (single dish) image
sdfactor            =        1.0        #  Scale factor to apply to Single Dish image
effdishdiam         =       -1.0        #  New effective SingleDish diameter to use in m
lowpassfiltersd     =      False        #  Filter out the high spatial frequencies of the SD image

First, we define the FWHM of the low-resolution (single-dish) image, which defines the effective cutoff point between the interferometer and the single dish. lowresfwhm is equivalent to effdishdiam from CASA, albeit in different units.

The kernels are the weight arrays for the single-dish and interferometer data. They are the fourier transform of the low-resolution beam and (1-that kernel), respectively.

# pixel scale can be interpreted as "arcseconds"
# then, fwhm=40 means a beam fwhm of 40"
pixscale = 1*u.arcsec
lowresfwhm = 40*u.arcsec
nax1,nax2 = im.shape
kfft, ikfft = feather_kernel(nax2, nax1, lowresfwhm, pixscale,)
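A hedged sketch of what feather_kernel presumably computes (the actual uvcombine implementation may differ in detail): the Fourier transform of a unit-sum Gaussian beam gives the single-dish weight, and one minus that gives the interferometer weight.

```python
import numpy as np

def gaussian_uv_weights(nax2, nax1, fwhm_pix):
    # Build a unit-sum Gaussian beam in image space...
    sigma = fwhm_pix / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    yy, xx = np.indices([nax2, nax1])
    rr = np.hypot(xx - nax1 // 2, yy - nax2 // 2)
    beam = np.exp(-rr**2 / (2.0 * sigma**2))
    beam /= beam.sum()
    # ...then its Fourier transform is the UV-space weight for the SD data
    kfft = np.abs(np.fft.fft2(np.fft.ifftshift(beam)))
    return kfft, 1.0 - kfft

kfft_sketch, ikfft_sketch = gaussian_uv_weights(512, 512, 40.0)
```

Because the beam sums to one, the single-dish weight is exactly 1 at the DC term and falls toward 0 at small angular scales, where the interferometer weight takes over.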

Then we specify a few parameters that are not all available in CASA. CASA's lowpassfiltersd is equivalent to our replace_hires, and their sdfactor is our lowresscalefactor. The other parameters, highresscalefactor and deconvSD, are unavailable in CASA.

# Feather the interferometer and single dish data back together
# This uses the naive assumptions that CASA uses
# However, there are a few flags that can be played with.
# None of them do a whole lot, though there are good theoretical
# reasons to attempt them.
im_hi = im_interferometered.real
im_low = singledish_im
lowresscalefactor=1
replace_hires = False
highpassfilterSD = False
deconvSD = False
highresscalefactor=1
fftsum, combo = fftmerge(kfft, ikfft, im_hi*highresscalefactor,
                         im_low*lowresscalefactor,
                         replace_hires=replace_hires,
                         highpassfilterSD=highpassfilterSD,
                         deconvSD=deconvSD,
                        )

The feathered data set looks pretty good.

This image looks pretty close to the original, but the peaks and valleys certainly are not recovered (the contrast is reduced compared to the original).

We then compare the feathered data to the input image.

The difference between the input image and the feathered image shows the remainder artifacts.

If we repeat the same experiment again, but with the shortest baseline at 12m instead of 9m, the results are noticeably worse:

largest_scale = 42.*u.arcsec # (1.22 * 2*u.mm/(12*u.m)).to(u.arcsec, u.dimensionless_angles())
smallest_scale = 2.5*u.arcsec
image_scale = im.shape[0]*pixscale # assume a square image (512 pixels = 512")
ring = (rr>=(image_scale/largest_scale)) & (rr<=(image_scale/smallest_scale))
# create the interferometric map by removing both large and small angular
# scales in fourier space
imfft = np.fft.fft2(im)
imfft_interferometered = imfft * np.fft.fftshift(ring)
im_interferometered = np.fft.ifft2(imfft_interferometered)
im_hi = im_interferometered.real
lowresscalefactor=1
replace_hires = False
highpassfilterSD = False
deconvSD = False
highresscalefactor=1
fftsum, combo = fftmerge(kfft, ikfft, im_hi*highresscalefactor,
                         im_low*lowresscalefactor,
                         replace_hires=replace_hires,
                         highpassfilterSD=highpassfilterSD,
                         deconvSD=deconvSD,
                        )

The feathered image with 12m shortest baselines instead of 9m

The side-by-side images comparison again, but with 12m instead of 9m shortest baselines

The difference image (original - feathered w/12m shortest baselines)

It is more helpful to look at the difference in power spectra. Note that the axes are log-scaled.

The components that go into the feather can help clarify the picture.

Different combinations of parameters can yield very different results.

We have 8 parameter combinations:

  • Replace singledish-interferometer overlap w/interferometer, or average them
  • Filter out the small angular scales from the single dish or don't
  • Deconvolve the single dish (direct deconvolution) or don't
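These three binary choices can be enumerated directly, using the fftmerge flag names from above:

```python
import itertools

# all 2^3 = 8 combinations of the three binary feather choices
param_grid = list(itertools.product([False, True], repeat=3))
for replace_hires, highpassfilterSD, deconvSD in param_grid:
    print(replace_hires, highpassfilterSD, deconvSD)
```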

The next slides are a walkthrough of parameter exploration with different effective dish sizes.

First, we have a 12m single-dish combined with an interferometer with shortest baseline length of 12m.

Residuals are in blue.

A short-spacing limit of 12m for the interferometer and dish size of 12m is a bad case, but not the worst.

ALMA's shortest baselines are 14.6m (main array) and 8.7m (7m array). L05, the 5th percentile baseline length, is 21.4m (9.1m) for the 12m (7m) array.

A realistic case for ALMA is then a 9m shortest baseline and a 12m effective single dish.

Residuals are in blue.

This realistic case is still bad. The "best case", in which we have good UV overlap (e.g., a 24m dish instead of a 12m), can actually achieve good theoretical recovery.

Residuals are in blue.

CASA's default parameters are fine, but for best performance, the single-dish image should be deconvolved.

The appropriate choice of feather parameters depends (mildly) on the UV overlap:

  • If the single dish is substantially bigger than the shortest baseline, CASA's defaults or replacing short-spacing with deconvolved single-dish both work well
  • If the single dish is comparable to the shortest baseline, the best results come from deconvolving the single-dish data and weighted-averaging them with the interferometric

These feather experiments represent the absolute best-case scenario. They should be used as references for comparison with any other combination technique.

Non-ideal cases

Besides simple UV coverage problems (which are expensive to fix and usually out of our control), there are other issues:

  • Relative calibration
  • Relative sensitivity

Relative calibration is an issue if the data simply aren't calibrated perfectly (systematic uncertainties are usually 5-15%) or if the single-dish data come from a different frequency range (e.g., using wide-band bolometer data to fill in the continuum zero-spacing).

We can simulate calibration error using the highresscalefactor and lowresscalefactor parameters, which are included to correct for these errors.

lowresscalefactor = 1.50
highresscalefactor = 1.0
fftsum, combo = fftmerge(kfft, ikfft, im_hi*highresscalefactor,
                         im_low*lowresscalefactor,
                         replace_hires=replace_hires,
                         highpassfilterSD=highpassfilterSD,
                         deconvSD=deconvSD,
                        )
lowresscalefactor = 1.0
highresscalefactor = 1.50
fftsum, combo = fftmerge(kfft, ikfft, im_hi*highresscalefactor,
                         im_low*lowresscalefactor,
                         replace_hires=replace_hires,
                         highpassfilterSD=highpassfilterSD,
                         deconvSD=deconvSD,
                        )

Relative sensitivity

The cases we've treated above assume that the interferometer and single-dish telescope have infinite sensitivity; the "noise" in the image is actually part of the sky. So, what happens if we add observational (as opposed to astrophysical) noise?

import radio_beam
import astropy.utils.misc
lowresscalefactor = 1.0
highresscalefactor = 1.0
sd_beam = radio_beam.Beam(lowresfwhm)
sd_beam_volume = (sd_beam.sr / pixscale**2).decompose()
with astropy.utils.misc.NumpyRNGContext(0):
    noise_sd = convolution.convolve_fft(singledish_im.max() * 0.0002 * np.random.randn(*singledish_im.shape) * sd_beam_volume,
                                        sd_beam.as_kernel(pixscale))
noisy_sd = (singledish_im + noise_sd)
fftsum, combo = fftmerge(kfft, ikfft, im_hi*highresscalefactor,
                         noisy_sd*lowresscalefactor,
                         replace_hires=replace_hires,
                         highpassfilterSD=highpassfilterSD,
                         deconvSD=deconvSD,
                        )

Now we'll do the same experiment with noise added to the interferometric image too

lowresscalefactor = 1.0
highresscalefactor = 1.0
intf_beam = radio_beam.Beam(u.Quantity(smallest_scale, u.arcsec))
intf_beam_volume = (intf_beam.sr / pixscale**2).decompose()
assert intf_beam_volume < 100
with astropy.utils.misc.NumpyRNGContext(0):
    noise_intf = convolution.convolve_fft(im_interferometered.real.max() * 0.02 * np.random.randn(*im_interferometered.shape) * intf_beam_volume,
                                        intf_beam.as_kernel(pixscale))
noisy_intf = (im_interferometered.real + noise_intf)
fftsum, combo = fftmerge(kfft, ikfft, noisy_intf*highresscalefactor,
                         noisy_sd*lowresscalefactor,
                         replace_hires=replace_hires,
                         highpassfilterSD=highpassfilterSD,
                         deconvSD=deconvSD,
                        )

Noisy data are not such a problem for the interferometer, but they're a pretty big deal for the single-dish data, since the smallest single-dish scales overlap with the interferometer scales.

Additionally, noise in the interferometric data set contributes to the single-dish scales.


Can we have an immerge-like capability in uvcombine?

We made the feather_compare function to try to determine the relative scaling factor between the interferometer and single-dish images. In this section, we'll show that for theoretically perfect data, the approach works.

Because of the design of feather_compare, we need to assign FITS headers to our datasets.

For the next experiment, we assume an interferometer whose largest recoverable scales overlap with the smallest scales sampled by the single-dish data.

# See above
largest_scale = 80.*u.arcsec
smallest_scale = 2.5*u.arcsec
image_scale = im.shape[0]*pixscale # assume a square image (512 pixels = 512")
ring = (rr>=(image_scale/largest_scale)) & (rr<=(image_scale/smallest_scale))

imfft = np.fft.fft2(im)
imfft_interferometered = imfft * np.fft.fftshift(ring)
im_interferometered = np.fft.ifft2(imfft_interferometered)
header = fits.Header({'CRVAL1':0.0, 'CRVAL2':0.0, 'CDELT1':1./3600., 'CDELT2':1./3600., 'CRPIX1':1.0, 'CRPIX2':1.0,
                      'CTYPE1':'GLON-CAR', 'CTYPE2':'GLAT-CAR'})
sd_hdu = fits.PrimaryHDU(data=singledish_im, header=header)
intf_hdu = fits.PrimaryHDU(data=im_interferometered.real, header=header)

We'll examine a few parameters.

First, beam_divide_lores applies a fourier-space deconvolution that attempts to remove the effect of the single-dish beam. We'll start by leaving this off.

Scales to compare: 40.0 arcsec-80.0 arcsec
Reported statistics of the ratio of the high-res to low-res data: {'median': 4.567690246649106, 'mean': 5.441820499553499, 'std': 4.016856772569976, 'mean_sc': 4.592593378913974, 'median_sc': 4.260188252026101, 'std_sc': 2.1000574949275292}

You can see the result: the high-res is over-compensated by a factor of 3-6 or so. The black line in the lower panel shows the effect of the beam in fourier space: it suppresses the intensity.

We can also see that this is a frustratingly noisy process.

print("Scales to compare: {0}-{1}".format(lowresfwhm*u.arcsec, largest_scale*u.arcsec))
stats = feather_compare(intf_hdu,
                        sd_hdu,
                        SAS=u.Quantity(lowresfwhm, u.arcsec),
                        LAS=u.Quantity(largest_scale, u.arcsec),
                        lowresfwhm=u.Quantity(lowresfwhm, u.arcsec),
                        beam_divide_lores=True,
                       )
# fix the last plot a little
pl.gca().set_ylim(1e-1, 1e6)
print("Reported statistics of the ratio of the high-res to low-res data:", stats)
Scales to compare: 40.0 arcsec-80.0 arcsec
/Users/adam/repos/uvcombine/uvcombine/uvcombine.py:1414: RuntimeWarning: divide by zero encountered in true_divide
  fft_lo_deconvolved = fft_lo / kfft
/Users/adam/repos/uvcombine/uvcombine/uvcombine.py:1414: RuntimeWarning: invalid value encountered in true_divide
  fft_lo_deconvolved = fft_lo / kfft
Reported statistics of the ratio of the high-res to low-res data: {'median': 0.9169716667841282, 'mean': 0.9844051671978118, 'std': 0.4017283438476936, 'mean_sc': 0.9160895881016673, 'median_sc': 0.9135213443714462, 'std_sc': 0.23195970629359297}

Now the match is better, but still unfortunately pretty awful because of the noise. The histogram shows us why: there are $\lesssim100$ points being compared between the two data sets.

We can also see what the 'real' image is supposed to look like in this space as compared with the single-dish and interferometer image:

inp_fft = np.fft.fftshift(np.fft.fft2(im))
sd_fft = np.fft.fftshift(np.fft.fft2(singledish_im.real))
intf_fft = np.fft.fftshift(np.fft.fft2(im_interferometered.real))
nax2,nax1 = im.shape
yy,xx = np.indices([nax2, nax1])
rr = ((xx-(nax1-1)/2.)**2 + (yy-(nax2-1)/2.)**2)**0.5
angscales = nax1/rr * u.Quantity(pixscale, u.arcsec)
largescales = angscales > 50*u.arcsec
pl.loglog(angscales.to(u.arcsec).value.flat, inp_fft.real.flat, 'k,', alpha=0.1, zorder=2)
pl.loglog(angscales.to(u.arcsec).value.flat, intf_fft.real.flat, 'r,', zorder=1)
pl.loglog(angscales.to(u.arcsec).value.flat, sd_fft.real.flat, 'b,', zorder=-1)

pl.loglog(angscales.to(u.arcsec).value[largescales], inp_fft.real[largescales].flat, 'k.', alpha=0.1, zorder=2)
pl.loglog(angscales.to(u.arcsec).value[largescales], intf_fft.real[largescales].flat, 'r.', zorder=1)
pl.loglog(angscales.to(u.arcsec).value[largescales], sd_fft.real[largescales].flat, 'b.', zorder=-1)

pl.ylim(1e-3, 1e5)

(red = interferometer, blue = single dish, black = "truth")

The above plot looks very messy, but there are a few points worth noting:

  • the single dish data don't reach 100% flux recovery until a very large scale; the convolution suppresses flux on all smaller scales
  • the interferometric data are exactly correct, since they were acquired with an 'ideal' sharply-truncated interferometer

So, the next question: What happens to the resulting image if you get the scale factor wrong?

from uvcombine.uvcombine import feather_simple
combo_correct = feather_simple(intf_hdu,
                               sd_hdu,
                               highresscalefactor=1.0,
                               lowresscalefactor=1.0,
                               lowresfwhm=u.Quantity(lowresfwhm, u.arcsec),
                               match_units=False,
                              )                        
# Feathered dataset
pl.figure(1, figsize=(16,8)).clf()
pl.subplot(1,2,1)
pl.imshow(combo_correct.real, cmap='viridis')
pl.colorbar()
pl.title("Feathered (singledish 40arcsec+interferometer) pl=1.5 image")
pl.subplot(1,2,2)
pl.imshow(im, cmap='viridis')
pl.colorbar()
pl.title("Input pl=1.5 image")


# Feathered dataset
pl.figure(2).clf()
pl.suptitle("Feathered (singledish 40arcsec+interferometer) pl=1.5 image")
pl.hist(combo_correct.real.ravel(), bins=50, label='feather')
_ = pl.hist(im.ravel(), bins=50, alpha=0.25, label='input')
leg = pl.legend(loc='best')

In the "perfect combination, everything exactly right" case, it looks fine. The images match well and the systematics in the histogram are fairly minor. We'll now go through a parameter exploration and show how badly things can go:

combo_lo_too_hi = feather_simple(intf_hdu,
                               sd_hdu,
                               highresscalefactor=1.0,
                               lowresscalefactor=2.0,
                               lowresfwhm=u.Quantity(lowresfwhm, u.arcsec),
                               match_units=False,
                              )

# Feathered dataset
pl.figure(1, figsize=(16,8)).clf()
pl.subplot(1,2,1)
pl.imshow(combo_lo_too_hi.real, cmap='viridis')
pl.colorbar()
pl.title("Feathered (singledish 40arcsec+interferometer) pl=1.5 image")
pl.subplot(1,2,2)
pl.imshow(im, cmap='viridis')
pl.colorbar()
pl.title("Input pl=1.5 image")


# Feathered dataset
pl.figure(2).clf()
pl.suptitle("Feathered (singledish 40arcsec+interferometer) pl=1.5 image")
_ = pl.hist(combo_lo_too_hi.real.ravel(), bins=50, label='feather')
_ = pl.hist(im.ravel(), bins=50, alpha=0.25, label='input')
leg = pl.legend(loc='best')

This is trivial: the flux is systematically shifted if you incorrectly scale the single-dish data.

combo_hi_too_hi = feather_simple(intf_hdu,
                                 sd_hdu,
                                 highresscalefactor=2.0,
                                 lowresscalefactor=1.0,
                                 lowresfwhm=u.Quantity(lowresfwhm, u.arcsec),
                                 match_units=False,
                              )

# Feathered dataset
pl.figure(1, figsize=(16,8)).clf()
pl.subplot(1,2,1)
pl.imshow(combo_hi_too_hi.real, cmap='viridis')
pl.colorbar()
pl.title("Feathered (singledish 40arcsec+interferometer) pl=1.5 image")
pl.subplot(1,2,2)
pl.imshow(im, cmap='viridis')
pl.colorbar()
pl.title("Input pl=1.5 image")


# Feathered dataset
pl.figure(2).clf()
pl.suptitle("Feathered (singledish 40arcsec+interferometer) pl=1.5 image")
_ = pl.hist(combo_hi_too_hi.real.ravel(), bins=50, label='feather')
_ = pl.hist(im.ravel(), bins=50, alpha=0.25, label='input')
leg = pl.legend(loc='best')

If the high-resolution data are given too high a scale factor, the overall flux calibration stays approximately correct, but the high-resolution component of the image is overemphasized, broadening the flux distribution.
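A self-contained toy demonstration (not the notebook's images) of that broadening: scaling all non-DC Fourier components while holding the DC term fixed preserves the mean but doubles the spread.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64)) + 10.0
imgfft = np.fft.fft2(img)
dc = imgfft[0, 0]
imgfft *= 2.0          # overweight the non-DC (structure) terms...
imgfft[0, 0] = dc      # ...but keep the total flux fixed
boosted = np.fft.ifft2(imgfft).real
print(img.mean(), boosted.mean())  # means agree
print(img.std(), boosted.std())    # spread doubles
```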

We can also check the importance and effectiveness of some of the other parameters.

combo_deconv_sd = feather_simple(intf_hdu,
                                 sd_hdu,
                                 highresscalefactor=1.0,
                                 lowresscalefactor=1.0,
                                 lowresfwhm=u.Quantity(lowresfwhm, u.arcsec),
                                 deconvSD=True,
                                 match_units=False,
                              )

# Feathered dataset
pl.figure(1, figsize=(16,8)).clf()
pl.subplot(1,2,1)
pl.imshow(combo_deconv_sd.real, cmap='viridis')
pl.colorbar()
pl.title("Feathered (singledish 40arcsec+interferometer) pl=1.5 image")
pl.subplot(1,2,2)
pl.imshow(im, cmap='viridis')
pl.colorbar()
pl.title("Input pl=1.5 image")


# Feathered dataset
pl.figure(2).clf()
pl.suptitle("Feathered (singledish 40arcsec+interferometer) pl=1.5 image")
_ = pl.hist(combo_deconv_sd.real.ravel(), bins=50, label='feather')
_ = pl.hist(im.ravel(), bins=50, alpha=0.25, label='input')
leg = pl.legend(loc='best')
/Users/adam/repos/uvcombine/uvcombine/uvcombine.py:417: RuntimeWarning: divide by zero encountered in true_divide
  lo_conv = fft_lo / kfft
/Users/adam/repos/uvcombine/uvcombine/uvcombine.py:417: RuntimeWarning: invalid value encountered in true_divide
  lo_conv = fft_lo / kfft

This works alright, but not great.

In principle, deconvolving the single-dish data in fourier space by the convolution kernel used to make the single-dish data should result in a better image. That it does not suggests the fourier-space deconvolution is simply not reliable enough: it is severely affected by noise where the beam's transform approaches zero.
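The failure mode can be seen in one dimension: dividing by the beam's transform amplifies anything (noise included) wherever the kernel approaches zero. A toy sketch:

```python
import numpy as np

# 1D stand-in for the beam's Fourier transform: a Gaussian falling toward zero
uv_radius = np.linspace(0, 10, 101)
kernel_fft = np.exp(-uv_radius**2 / 2.0)

noise_level = 1e-6                       # tiny, constant noise floor
amplified = noise_level / kernel_fft     # what deconvolution does to it
# at the edge of the beam, the noise is amplified by many orders of magnitude
print(amplified[0], amplified[-1])
```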

combo_replace_hires_08 = feather_simple(intf_hdu,
                                        sd_hdu,
                                        highresscalefactor=1.0,
                                        lowresscalefactor=1.0,
                                        lowresfwhm=u.Quantity(lowresfwhm, u.arcsec),
                                        replace_hires=0.8,
                                        match_units=False,
                              )

# Feathered dataset
pl.figure(1, figsize=(16,8)).clf()
pl.subplot(1,2,1)
pl.imshow(combo_replace_hires_08.real, cmap='viridis')
pl.colorbar()
pl.title("Feathered (singledish 40arcsec+interferometer) pl=1.5 image")
pl.subplot(1,2,2)
pl.imshow(im, cmap='viridis')
pl.colorbar()
pl.title("Input pl=1.5 image")


# Feathered dataset
pl.figure(2).clf()
pl.suptitle("Feathered (singledish 40arcsec+interferometer) pl=1.5 image")
_ = pl.hist(combo_replace_hires_08.real.ravel(), bins=50, label='feather')
_ = pl.hist(im.ravel(), bins=50, alpha=0.25, label='input')
leg = pl.legend(loc='best')
combo_replace_hires_02 = feather_simple(intf_hdu,
                                        sd_hdu,
                                        highresscalefactor=1.0,
                                        lowresscalefactor=1.0,
                                        lowresfwhm=u.Quantity(lowresfwhm, u.arcsec),
                                        replace_hires=0.2,
                                        match_units=False,
                              )

# Feathered dataset
pl.figure(1, figsize=(16,8)).clf()
pl.subplot(1,2,1)
pl.imshow(combo_replace_hires_02.real, cmap='viridis')
pl.colorbar()
pl.title("Feathered (singledish 40arcsec+interferometer) pl=1.5 image")
pl.subplot(1,2,2)
pl.imshow(im, cmap='viridis')
pl.colorbar()
pl.title("Input pl=1.5 image")


# Feathered dataset
pl.figure(2).clf()
pl.suptitle("Feathered (singledish 40arcsec+interferometer) pl=1.5 image")
_ = pl.hist(combo_replace_hires_02.real.ravel(), bins=50, label='feather')
_ = pl.hist(im.ravel(), bins=50, alpha=0.25, label='input')
leg = pl.legend(loc='best')

This approach is fairly suspect. All it does is take the original single-dish and interferometer images, fourier transform them, then replace the pixels out to a certain radius with the interferometric data.

The above figures show some case studies. Can we assess this more quantitatively?

In order to make the test a little more clear, we'll make the interferometer have "perfect" resolution, i.e., no small-angular-scale cutoff.

# See above
largest_scale = 80.*u.arcsec
image_scale = im.shape[0]*pixscale # assume a square image (512 pixels = 512")
ring = (rr>=(image_scale/largest_scale))

imfft = np.fft.fft2(im)
imfft_interferometered = imfft * np.fft.fftshift(ring)
im_interferometered = np.fft.ifft2(imfft_interferometered)
intf_hdu = fits.PrimaryHDU(data=im_interferometered.real, header=header)
diff_sqs = []
for highresscalefactor in np.logspace(-1,1):
    combo = feather_simple(intf_hdu,
                           sd_hdu,
                           highresscalefactor=highresscalefactor,
                           lowresscalefactor=1.0,
                           lowresfwhm=u.Quantity(lowresfwhm, u.arcsec),
                           match_units=False,
                                  )                        
    diff_sqs.append(((combo.real-im)**2).sum())

pl.loglog(np.logspace(-1,1), diff_sqs)
pl.xlabel("High-resolution scale factor")
pl.ylabel("Squared difference between maps (zero is perfect)")

Next, let's assess how well feather performs as a function of the interferometer's largest angular scale (LAS).

# singledish scale is fixed at 40"
diff_sqs = []

for largest_scale in np.linspace(20,256)*u.arcsec:

    image_scale = im.shape[0]*pixscale # assume a square image (512 pixels = 512")
    ring = (rr>=(image_scale/largest_scale))

    imfft = np.fft.fft2(im)
    imfft_interferometered = imfft * np.fft.fftshift(ring)
    im_interferometered = np.fft.ifft2(imfft_interferometered)
    intf_hdu = fits.PrimaryHDU(data=im_interferometered.real, header=header)
    
    combo = feather_simple(intf_hdu,
                           sd_hdu,
                           highresscalefactor=1.0,
                           lowresscalefactor=1.0,
                           lowresfwhm=u.Quantity(lowresfwhm, u.arcsec),
                           match_units=False,
                          )                        
    diff_sqs.append(((combo.real-im)**2).sum())
    
pl.semilogy(np.linspace(20,256), diff_sqs)
pl.xlabel("Interferometric LAS (arcsec)")
pl.ylabel("Squared difference between maps (zero is perfect)")

The same experiment again, but with a fixed largest angular scale and a varying single-dish FWHM.

largest_scale = 80*u.arcsec #fixed LAS
image_scale = im.shape[0]*pixscale # assume symmetric (default=256)
ring = (rr>=(image_scale/largest_scale))

imfft = np.fft.fft2(im)
imfft_interferometered = imfft * np.fft.fftshift(ring)
im_interferometered = np.fft.ifft2(imfft_interferometered)
intf_hdu = fits.PrimaryHDU(data=im_interferometered.real, header=header)


diff_sqs = []

for lowresfwhm in np.linspace(20,256):

    singledish_im = convolution.convolve_fft(im,
                                             convolution.Gaussian2DKernel(lowresfwhm/2.35),
                                             boundary='fill', fill_value=im.mean())
    sd_hdu = fits.PrimaryHDU(data=singledish_im, header=header)

    combo = feather_simple(intf_hdu,
                           sd_hdu,
                           highresscalefactor=1.0,
                           lowresscalefactor=1.0,
                           lowresfwhm=u.Quantity(lowresfwhm, u.arcsec),
                           match_units=False,
                          )                        
    diff_sqs.append(((combo.real-im)**2).sum())
    
pl.semilogy(np.linspace(20,256), diff_sqs)
pl.xlabel("Single Dish FWHM (arcsec)")
pl.ylabel("Squared difference between maps (zero is perfect)")

Clearly the single dish FWHM makes a pretty big difference...

Below here are just experiments

We used feather_compare to compare images directly in Fourier space. What happens if we compare them in image space after filtering?

  1. Reset to 120" LAS interferometer and 40" FWHM SD
largest_scale = 120*u.arcsec #fixed LAS
image_scale = im.shape[0]*pixscale # assume symmetric (default=256)
ring = (rr>=(image_scale/largest_scale))

imfft = np.fft.fft2(im)
imfft_interferometered = imfft * np.fft.fftshift(ring)
im_interferometered = np.fft.ifft2(imfft_interferometered)
im_hi = im_interferometered.real

intf_hdu = fits.PrimaryHDU(data=im_hi, header=header)

lowresfwhm = 40.
singledish_im = convolution.convolve_fft(im,
                                         convolution.Gaussian2DKernel(lowresfwhm/2.35),
                                         boundary='fill', fill_value=im.mean())
sd_hdu = fits.PrimaryHDU(data=singledish_im, header=header)
SAS = 40*u.arcsec
LAS = 120*u.arcsec


nax2, nax1 = im.shape
lowresfwhm = u.Quantity(lowresfwhm, u.arcsec)
pixscale = u.Quantity(pixscale, u.arcsec)

kfft, ikfft = feather_kernel(nax2, nax1, lowresfwhm, pixscale)
kfft = np.fft.fftshift(kfft)
ikfft = np.fft.fftshift(ikfft)

yy,xx = np.indices([nax2, nax1])
rr = ((xx-(nax1-1)/2.)**2 + (yy-(nax2-1)/2.)**2)**0.5
# pixscale is already an angular Quantity (arcsec), so no extra unit factor is needed
angscales = nax1 / rr * pixscale

fft_hi = np.fft.fftshift(np.fft.fft2(np.nan_to_num(im_hi)))
fft_lo = np.fft.fftshift(np.fft.fft2(np.nan_to_num(singledish_im)))
# ignore divide-by-zero where the beam response underflows to zero; those
# modes are masked out below via min_beam_fraction
with np.errstate(divide='ignore', invalid='ignore'):
    fft_lo_deconvolved = fft_lo / kfft

min_beam_fraction = 0.05
plot_min_beam_fraction = 0.05
below_beamscale = kfft < min_beam_fraction
below_beamscale_plotting = kfft < plot_min_beam_fraction

mask = (angscales > SAS) & (angscales < LAS) & (~below_beamscale)
assert mask.sum() > 0

hi_img_ring = (np.fft.ifft2(np.fft.fftshift(fft_hi*mask)))
lo_img_ring = (np.fft.ifft2(np.fft.fftshift(fft_lo*mask)))
lo_img_ring_deconv = (np.fft.ifft2(np.fft.fftshift(np.nan_to_num(fft_lo_deconvolved*mask))))
inp_img_ring = np.fft.ifft2(np.fft.fft2(im)*np.fft.fftshift(mask))

In these images, we compare the "intermediate scales" (the scales in the UV overlap regime) between the interferometric and single-dish data. The interferometric data is from a "perfect interferometer", while the single dish data is from a "realistic" single-dish, i.e., one with a Gaussian beam. The single-dish "intermediate scales" are too smooth (dominated by large angular scales) prior to deconvolution.
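The beam-deconvolution step used above can be illustrated with a toy example. A minimal sketch, assuming a purely Gaussian beam and noiseless data (neither of which is true for real observations): dividing the single-dish image's FFT by the beam's FFT undoes the smoothing exactly on the scales where the beam response is non-negligible.

```python
import numpy as np

# Toy demonstration (idealized: noiseless data, purely Gaussian beam):
# deconvolution in Fourier space recovers the input on scales where the
# beam retains sensitivity.
rng = np.random.default_rng(0)
npix = 128
im_true = rng.standard_normal((npix, npix))

# Gaussian beam, FWHM in pixels, built in the image plane
fwhm_pix = 8.0
sigma = fwhm_pix / np.sqrt(8 * np.log(2))
yy, xx = np.indices((npix, npix))
rr_pix = np.hypot(xx - npix // 2, yy - npix // 2)
beam = np.exp(-rr_pix**2 / (2 * sigma**2))
beam /= beam.sum()
kfft = np.abs(np.fft.fft2(np.fft.ifftshift(beam)))  # beam response in the UV plane

# "observe" with the single dish, then deconvolve in Fourier space
sd_fft = np.fft.fft2(im_true) * kfft
deconv_fft = sd_fft / kfft

# keep only modes where the beam retains some sensitivity (cf. min_beam_fraction)
good = kfft > 0.05
recovered = np.fft.ifft2(np.where(good, deconv_fft, 0)).real
reference = np.fft.ifft2(np.where(good, np.fft.fft2(im_true), 0)).real
print(np.allclose(recovered, reference))  # → True
```

With noise, the division amplifies whatever power sits under the small kfft values, which is why a min_beam_fraction-style mask is essential in practice.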

pl.figure(1, figsize=(16,5)).clf()
pl.subplot(1,3,1)
pl.imshow((hi_img_ring).real, cmap='viridis')
pl.colorbar()
pl.title("Interferometer mid-scales")
pl.subplot(1,3,2)
pl.imshow((lo_img_ring).real, cmap='viridis')
pl.colorbar()
pl.title("SD mid scales")
pl.subplot(1,3,3)
pl.imshow((lo_img_ring_deconv).real, cmap='viridis')
pl.colorbar()
pl.title("SD mid scales (deconv)")

pl.figure()
pl.imshow((inp_img_ring).real, cmap='viridis')
pl.title("Input mid-scales (back of the book)")

The single-dish and interferometer UV-overlap "intermediate scales" are very closely correlated, but that correlation is noisy.

pl.plot((lo_img_ring_deconv.real).ravel(), (hi_img_ring.real).ravel(), ',', alpha=0.05)
pl.plot([-4,4],[-4,4],'k--')
_=pl.hist((lo_img_ring_deconv.real).ravel() / (hi_img_ring.real).ravel(), bins=np.linspace(0.5,2), log=True)
ratio = (lo_img_ring_deconv.real).ravel() / (hi_img_ring.real).ravel()
sd_weighted_mean_ratio = ((lo_img_ring_deconv.real.ravel())**2 * ratio).sum() / ((lo_img_ring_deconv.real.ravel())**2).sum()
sd_weighted_mean_ratio
ratio = (lo_img_ring_deconv.real).ravel() / (hi_img_ring.real).ravel()
intf_weighted_mean_ratio = ((hi_img_ring.real.ravel())**2 * ratio).sum() / ((hi_img_ring.real.ravel())**2).sum()
intf_weighted_mean_ratio
# find the scale factor that minimizes the distance between the images
diffsq = [((lo_img_ring_deconv.real - scf*hi_img_ring.real)**2).sum() for scf in np.linspace(0.5,2.0)]
pl.plot(np.linspace(0.5,2.0), diffsq)
np.linspace(0.5,2.0)[np.argmin(diffsq)]

Try singular value decomposition (SVD) to determine the scale factor. It does not come very close to a useful answer.

U,S,V = np.linalg.svd(np.array([lo_img_ring_deconv.real.ravel(), hi_img_ring.real.ravel()]).T,
                      full_matrices=False)
V[0,:]/V[1,:]
print("largest_scale: {0} lowresfwhm: {1}".format(largest_scale, lowresfwhm))
stats = feather_compare(intf_hdu,
                        sd_hdu,
                        SAS=u.Quantity(lowresfwhm,u.arcsec),
                        LAS=u.Quantity(largest_scale,u.arcsec),
                        lowresfwhm=u.Quantity(lowresfwhm,u.arcsec),
                        beam_divide_lores=True,
                       )
print("Fourier-based results: ", stats)

The above plot shows three panels:

(top left): The measured amplitude of the FFT of the low-resolution (single-dish) image on the Y-axis vs. the measured amplitude of the FFT of the high-resolution (interferometer) image on the X-axis. Only pixels in the "UV overlap range" defined by the mask mask = (angscales > SAS) & (angscales < LAS) & (~below_beamscale) are included. The 1-1 line, i.e., the perfect-match line, is dashed. An approximate best-fit value is shown with the dotted line; this value is reported as 'median_sc' in the results.

(top right): Histogram of the ratio of the high resolution / low resolution data in the masked region (i.e., ratio of the X-axis to the Y-axis of the top-left plot)

(bottom): The amplitude of the high-resolution interferometric (red) and low-resolution, deconvolved single-dish (blue) data versus angular scale. The black curve shows the single-dish kernel. The vertical black lines bracket the "UV overlap" space selected by the largest/smallest angular scale parameters.

The overall conclusion of the above scale factor determination attempts: Both approaches are noisy, but get to within about 10% of the true scale factor (1.0) in this idealized case.

from uvcombine.uvcombine import angular_range_image_comparison

How well does the angular range image comparison perform for different values of the largest/smallest angular scale?

(assuming we always get the FWHM right)

scalefactors = np.zeros([50,50])
for ii,LAS in enumerate(np.linspace(40,220)*u.arcsec):
    
    for jj,SAS in enumerate(np.linspace(10,100)*u.arcsec):
        try:
            scalefactor = angular_range_image_comparison(intf_hdu,
                                                         sd_hdu,
                                                         SAS=SAS,
                                                         LAS=LAS,
                                                         lowresfwhm=u.Quantity(lowresfwhm, u.arcsec),
                                                         beam_divide_lores=True,
                                                         min_beam_fraction=0.005,
                                                         )
            scalefactors[ii,jj] = scalefactor
        except Exception as ex:
            continue
pl.imshow(scalefactors, extent=[10,100,40,220], aspect=1./2); pl.colorbar()
pl.plot([lowresfwhm.value,lowresfwhm.value],[40,220],'k-')
pl.plot([10,100],[largest_scale,largest_scale],'k-')
pl.xlabel("Smallest Angular Scale")
pl.ylabel("Largest Angular Scale")

In this figure, the correct answer is 1.00, i.e., the images were input on identical "flux scales" and need no additional scaling. The vertical and horizontal lines show the "true" LAS and SAS, where SAS = the low resolution FWHM (i.e., it is approximate because it's just a scaling parameter of a Gaussian) while the LAS represents the sharp cutoff of the simulated interferometer.

This is a useful summary figure. As long as the largest angular scale is approximately correct (within ~20-30%), the recovered scale factor is also accurate to within ~20-30%, but accuracy degrades quickly beyond that. It is best to guess low for the LAS, thereby including "too much" flux below the single-dish beam scale; this makes sense because the FWHM is not a strict cutoff.

For the smallest angular scale, it is best to guess high: something 25-50% larger than the FWHM works much better than using the exact FWHM.

Narrow ranges are noisy, but they appear to do the best in terms of getting the right answer.

from uvcombine.uvcombine import scale_comparison

Compare the images to the correct answer as a function of scale

On its own, this scale comparison doesn't say much, except that the original and 'feathered' images get pretty close but never converge. This might be useful for comparing a few different methods and seeing how well they each converge.
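The general idea behind a scale-by-scale comparison can be sketched with plain numpy (the exact implementation inside uvcombine's scale_comparison may differ): smooth both images to a sequence of scales and record the squared difference at each one.

```python
import numpy as np

# Minimal sketch of a scale-by-scale comparison (uvcombine's scale_comparison
# may differ in detail): smooth both images with Gaussian tapers of increasing
# width and record the squared difference at each scale.
def smooth_fft(img, sigma_pix):
    ny, nx = img.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    # exact Fourier transform of a Gaussian of stddev sigma_pix
    taper = np.exp(-2 * np.pi**2 * sigma_pix**2 * (fx**2 + fy**2))
    return np.fft.ifft2(np.fft.fft2(img) * taper).real

def scale_comparison_sketch(img1, img2, scales_pix):
    return np.array([((smooth_fft(img1, s) - smooth_fft(img2, s))**2).sum()
                     for s in scales_pix])

rng = np.random.default_rng(0)
truth = rng.standard_normal((64, 64))
noisy = truth + 0.1 * rng.standard_normal((64, 64))
chi2s = scale_comparison_sketch(truth, noisy, [1, 2, 4, 8])
# small-scale noise is smoothed away, so the difference shrinks with scale
```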

scales = np.arange(1,64)
chi2s_of_scale = scale_comparison(im, combo.real, scales)
pl.plot(scales, chi2s_of_scale)
scales_bigsteps = np.arange(32,256,32)
chi2s_of_scale_bigsteps = scale_comparison(im, combo.real, scales_bigsteps)
pl.plot(scales_bigsteps, chi2s_of_scale_bigsteps)

So what happens to convergence as you mess up the parameters (i.e., mimicking immerge-like behavior and/or calibration errors)?

pl.figure(figsize=(12,8))
bad_combos = {}
# correct lowresfwhm is 40
for highresscalefactor, lowresscalefactor, lowresfwhm in (
    (1,1,40),
    (1,1.1,40),
    (1,0.9,40),
    (0.9,1.0,40),
    (1.1,1.0,40),
    (1.0,1.0,60),
    (1.0,1.0,30),
):
        combo_ = feather_simple(intf_hdu,
                                sd_hdu,
                                highresscalefactor=highresscalefactor,
                                lowresscalefactor=lowresscalefactor,
                                lowresfwhm=lowresfwhm*u.arcsec,
                                match_units=False,
                               )
        scales_bigsteps = np.arange(32,256,32)
        chi2s_of_scale_bigsteps = scale_comparison(im, combo_.real, scales_bigsteps)
        label = (highresscalefactor,lowresscalefactor,lowresfwhm)
        bad_combos[label] = chi2s_of_scale_bigsteps
        pl.plot(scales_bigsteps, chi2s_of_scale_bigsteps, label="HRSF={0} LRSF={1} LRFWHM={2}".format(*label))
        
pl.legend(loc='best')

Conclusion: difference-squared is not a very useful metric to assess image quality. The power-spectrum-based comparisons are better.
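A power-spectrum comparison can be sketched with a minimal azimuthally-averaged PSD (turbustat's psds module provides a more complete implementation). Comparing the PSD of a feathered image against the PSD of the input shows directly which spatial scales lost or gained power.

```python
import numpy as np

# Minimal azimuthally-averaged power spectral density: bin the 2D power
# spectrum by radial spatial frequency and average within each bin.
def azimuthal_psd(img):
    psd2d = np.abs(np.fft.fftshift(np.fft.fft2(img)))**2
    yy, xx = np.indices(img.shape)
    rr = np.hypot(xx - img.shape[1] // 2, yy - img.shape[0] // 2).astype(int)
    power = np.bincount(rr.ravel(), weights=psd2d.ravel())
    counts = np.bincount(rr.ravel())
    return power / np.maximum(counts, 1)   # mean power in each radial bin

rng = np.random.default_rng(0)
img = rng.standard_normal((128, 128))
psd = azimuthal_psd(img)
# Plotting psd(combined) / psd(input) against spatial frequency makes
# UV-overlap errors visible as deviations from unity.
```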

Another test: How much noise is added to the image when you miss intermediate scales?

This experiment uses a somewhat plausible ALMA-like setup:

  • 40" largest-angular scale (e.g., ~7m)
  • 33" single-dish FWHM (Bolocam @ CSO @ 1.1mm)
  • 2" interferometer FWHM (config 1-ish)
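As a rough illustration (an idealization, not uvcombine's actual weighting) of why intermediate scales can be missed with this setup: the interferometer cuts off sharply at its LAS, while the single-dish sensitivity is a Gaussian taper set by its beam, so the combined sensitivity dips in between.

```python
import numpy as np

# Idealized sensitivity-vs-scale sketch for a sharp interferometer cutoff
# plus a Gaussian single-dish beam (not uvcombine's actual weighting).
angular_scale = np.linspace(5, 200, 400)   # arcsec
las = 40.0                                 # interferometer LAS (sharp cutoff)
sd_fwhm = 33.0                             # single-dish beam FWHM

# Interferometer: full sensitivity to scales smaller than the LAS
intf_response = (angular_scale < las).astype(float)

# Single dish: a Gaussian beam of stddev sigma_b gives a Gaussian UV taper;
# at angular scale theta (spatial frequency 1/theta) the response is
# exp(-2 pi^2 sigma_b^2 / theta^2)
sigma_b = sd_fwhm / np.sqrt(8 * np.log(2))
sd_response = np.exp(-2 * np.pi**2 * sigma_b**2 / angular_scale**2)

# Take the better of the two at each scale as a crude combined sensitivity
combined = np.maximum(intf_response, sd_response)
gap = (angular_scale > las) & (angular_scale < 2 * sd_fwhm)
print(combined[gap].min())  # a deep dip just above the LAS
```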
# LAS for 7m separation
(1.13*1.3*u.mm/(7*u.m)).to(u.arcsec, u.dimensionless_angles())
from uvcombine.tests import utils
from uvcombine import feather_simple
from astropy import units as u
from astropy.io import fits
from astropy import log
pixel_scale = 1*u.arcsec
log.info("Generate input image")
input_hdu = utils.generate_test_fits(imsize=512, powerlaw=1.5, beamfwhm=2*u.arcsec, pixel_scale=pixel_scale)
log.info("make Interferometric image")
intf_hdu = fits.PrimaryHDU(data=
        utils.interferometrically_observe_image(image=input_hdu.data,
                                                pixel_scale=pixel_scale,
                                                largest_angular_scale=40*u.arcsec,
                                                smallest_angular_scale=2*u.arcsec)[0].real,
                           header=input_hdu.header
                          )
log.info("make SD image")
sd_header = input_hdu.header.copy()
sd_header['BMAJ'] = sd_header['BMIN'] = (33*u.arcsec).to(u.deg).value
sd_hdu = fits.PrimaryHDU(data=utils.singledish_observe_image(image=input_hdu.data,
                                                             pixel_scale=pixel_scale,
                                                             smallest_angular_scale=33*u.arcsec),
                           header=sd_header
                          )
log.info("Feather data")
feathered_hdu = feather_simple(hires=intf_hdu, lores=sd_hdu, return_hdu=True)

Look at the raw results. The problems are pretty obvious.

pl.figure(figsize=(16,3))
pl.subplot(1,4,1)
pl.imshow(input_hdu.data); pl.colorbar()
pl.subplot(1,4,2)
pl.imshow(intf_hdu.data); pl.colorbar()
pl.subplot(1,4,3)
pl.imshow(sd_hdu.data); pl.colorbar()
pl.subplot(1,4,4)
pl.imshow(feathered_hdu.data); pl.colorbar()

Compare the input to output on a linear scale

pl.plot(input_hdu.data.ravel(), feathered_hdu.data.ravel(), ',', alpha=0.25)
pl.plot([0,10],[0,10])

How much noise have we added to the image? Something like 10% of the peak amplitude... not great.

print("RMS difference between input and feathered image (no observational noise was added; this error is intrinsic to the combination): {0}".format((input_hdu.data-feathered_hdu.data).std()))